Llama-based source code vulnerability detection: Prompt engineering vs Fine tuning

Ouchebara, Dyna Soumhane, Dupont, Stéphane

arXiv.org Artificial Intelligence

The significant increase in software production, driven by the acceleration of development cycles over the past two decades, has led to a steady rise in software vulnerabilities, as shown by statistics published yearly by the CVE program. The automation of the source code vulnerability detection (CVD) process has thus become essential, and several methods have been proposed, ranging from well-established program analysis techniques to more recent AI-based methods. Our research investigates Large Language Models (LLMs), which are considered among the most performant AI models to date, for the CVD task. The objective is to study their performance and apply different state-of-the-art techniques to enhance their effectiveness for this task. We explore various fine-tuning and prompt engineering settings. In particular, we suggest one novel approach for fine-tuning LLMs, which we call Double Fine-tuning, and also test the understudied Test-Time fine-tuning approach. We leverage the recent open-source Llama-3.1 8B, with source code samples extracted from the BigVul and PrimeVul datasets. Our conclusions highlight the importance of fine-tuning for resolving the task, the performance of Double Fine-tuning, as well as the potential of Llama models for CVD. Though prompting proved ineffective, Retrieval-Augmented Generation (RAG) performed relatively well as an example selection technique. Overall, some of our research questions have been answered, while others remain open, leaving many directions for future work. The code repository is available here: https://github.com/DynaSoumhaneOuchebara/Llama-based-vulnerability-detection.
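The RAG-based example selection mentioned in this abstract can be illustrated with a small sketch: retrieve the labelled code samples most similar to the query function and assemble them into a few-shot prompt. This is an assumed, minimal version (the names `select_examples` and `build_prompt`, and token-level cosine similarity as the retrieval metric, are illustrative choices, not the paper's actual pipeline):

```python
import math
import re
from collections import Counter

def tokenize(code):
    # Split source code into identifier-like tokens.
    return re.findall(r"[A-Za-z_]\w*", code)

def cosine(tokens_a, tokens_b):
    # Cosine similarity between two bag-of-tokens vectors.
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(query, corpus, k=2):
    """Retrieve the k labelled samples most similar to the query function."""
    qt = tokenize(query)
    ranked = sorted(corpus, key=lambda s: cosine(qt, tokenize(s["code"])), reverse=True)
    return ranked[:k]

def build_prompt(query, examples):
    # Few-shot prompt: retrieved (code, label) pairs, then the query to classify.
    shots = "\n".join(f"Code:\n{e['code']}\nVulnerable: {e['label']}" for e in examples)
    return f"{shots}\nCode:\n{query}\nVulnerable:"
```

A real system would embed samples with a neural encoder rather than token overlap, but the retrieve-then-prompt structure is the same.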


Evaluating Dataset Watermarking for Fine-tuning Traceability of Customized Diffusion Models: A Comprehensive Benchmark and Removal Approach

Wang, Xincheng, Sun, Hanchi, Sun, Wenjun, Xue, Kejun, Zhou, Wangqiu, Zhang, Jianbo, Sun, Wei, Zhu, Dandan, Min, Xiongkuo, Jia, Jun, Fang, Zhijun

arXiv.org Artificial Intelligence

Recent fine-tuning techniques for diffusion models enable them to reproduce specific image sets, such as particular faces or artistic styles, but also introduce copyright and security risks. Dataset watermarking has been proposed to ensure traceability by embedding imperceptible watermarks into training images, which remain detectable in outputs even after fine-tuning. However, current methods lack a unified evaluation framework. To address this gap, this paper establishes a general threat model and introduces a comprehensive evaluation framework encompassing Universality, Transmissibility, and Robustness. Experiments show that existing methods perform well in universality and transmissibility, and exhibit some robustness against common image processing operations, yet still fall short under real-world threat scenarios. To reveal these vulnerabilities, the paper further proposes a practical watermark removal method that fully eliminates dataset watermarks without affecting fine-tuning, highlighting a key challenge for future research.
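The core idea of embedding an imperceptible but detectable watermark in training images can be sketched with a toy least-significant-bit scheme (a deliberately simple stand-in; the methods surveyed in the paper use far more robust embeddings, and the function names here are illustrative):

```python
import numpy as np

def embed_watermark(image, bits):
    """Write the watermark bits into the least-significant bit of the first pixels."""
    flat = image.flatten()  # flatten() returns a copy, so the input stays untouched
    n = len(bits)
    # Clear the LSB (mask 0xFE) and OR in the watermark bit.
    flat[:n] = (flat[:n] & 0xFE) | np.asarray(bits, dtype=image.dtype)
    return flat.reshape(image.shape)

def extract_watermark(image, n):
    """Read back the first n least-significant bits."""
    return (image.flatten()[:n] & 1).tolist()
```

Because only the LSB changes, each pixel moves by at most one intensity level, which is why the mark is imperceptible; a robust dataset watermark must additionally survive fine-tuning and image processing, which a plain LSB scheme does not.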



We thank all reviewers for their constructive and valuable feedback and are delighted to receive an overall positive

Neural Information Processing Systems

Furthermore, we will integrate all changes according to your suggestions and questions. Figure 1: Evaluation on the DAVIS 2017 validation set. The connection between the inner and outer optimization is also illustrated in Algorithm 1 of the supplementary. The word "mitigate" in line 7 refers to shortcomings, and we will improve the understandability of the abstract. The fine-tuning epochs in Table 1 refer to a single update with one image, as in "How to train your MAML".


Review for NeurIPS paper: Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding

Neural Information Processing Systems

The overall novelty of the proposed model is limited to some extent. I think this module is very similar to Meta-GNN: both conduct adaptation on the support set and then evaluate on the query set, though they employ prototypes and MAML, respectively. In my view, the overall model stands on the shoulders of some traditional approaches and seems a bit incremental. Could some other approaches, such as fine-tuning (which is often used as a comparison baseline for meta-learning), solve this novel-label problem?


Countering Backdoor Attacks in Image Recognition: A Survey and Evaluation of Mitigation Strategies

Dunnett, Kealan, Arablouei, Reza, Miller, Dimity, Dedeoglu, Volkan, Jurdak, Raja

arXiv.org Artificial Intelligence

The widespread adoption of deep learning across various industries has introduced substantial challenges, particularly in terms of model explainability and security. The inherent complexity of deep learning models, while contributing to their effectiveness, also renders them susceptible to adversarial attacks. Among these, backdoor attacks are especially concerning, as they involve surreptitiously embedding specific triggers within training data, causing the model to exhibit aberrant behavior when presented with input containing the triggers. Such attacks often exploit vulnerabilities in outsourced processes, compromising model integrity without affecting performance on clean (trigger-free) input data. In this paper, we present a comprehensive review of existing mitigation strategies designed to counter backdoor attacks in image recognition. We provide an in-depth analysis of the theoretical foundations, practical efficacy, and limitations of these approaches. In addition, we conduct an extensive benchmarking of sixteen state-of-the-art approaches against eight distinct backdoor attacks, utilizing three datasets, four model architectures, and three poisoning ratios. Our results, derived from 122,236 individual experiments, indicate that while many approaches provide some level of protection, their performance can vary considerably. Furthermore, when compared to two seminal approaches, most newer approaches do not demonstrate substantial improvements in overall performance or consistency across diverse settings. Drawing from these findings, we propose potential directions for developing more effective and generalizable defensive mechanisms in the future.
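The trigger-embedding mechanism this survey describes can be made concrete with a toy dirty-label poisoning sketch: stamp a small patch onto a fraction of training images and relabel them to the attacker's target class. This is an assumed minimal illustration (patch size, position, and the `poison` helper are illustrative, not the benchmarked attacks):

```python
import numpy as np

def poison(images, labels, target_label, ratio=0.1, seed=0):
    """Return a poisoned copy: stamp a trigger patch and relabel a fraction of samples."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(ratio * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 255   # 3x3 white trigger in the bottom-right corner
    labels[idx] = target_label    # dirty-label attack: force the target class
    return images, labels, idx
```

A model trained on this data behaves normally on clean inputs but predicts `target_label` whenever the patch appears, which is exactly the behavior the surveyed mitigation strategies try to detect or remove.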


Efficient Adaptation of Multilingual Models for Japanese ASR

Bajo, Mark, Fukukawa, Haruka, Morita, Ryuji, Ogasawara, Yuma

arXiv.org Artificial Intelligence

This study explores fine-tuning multilingual ASR (Automatic Speech Recognition) models, specifically OpenAI's Whisper-Tiny, to improve performance in Japanese. While multilingual models like Whisper offer versatility, they often lack precision in specific languages. Conversely, monolingual models like ReazonSpeech excel in language-specific tasks but are less adaptable. Using Japanese-specific datasets and Low-Rank Adaptation (LoRA) along with end-to-end (E2E) training, we fine-tuned Whisper-Tiny to bridge this gap. Our results show that fine-tuning reduced Whisper-Tiny's Character Error Rate (CER) from 32.7 to 20.8 with LoRA and to 14.7 with end-to-end fine-tuning, surpassing Whisper-Base's CER of 20.2. However, challenges with domain-specific terms remain, highlighting the need for specialized datasets. These findings demonstrate that fine-tuning multilingual models can achieve strong language-specific performance while retaining their flexibility. This approach provides a scalable solution for improving ASR in resource-constrained environments and languages with complex writing systems like Japanese.
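The LoRA technique used here can be sketched numerically: the frozen pretrained weight W is adapted as W + (alpha / r) * B @ A, where only the small matrices A and B are trained. The shapes and constants below are illustrative, not Whisper-Tiny's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 4, 8   # r << d_in: the low-rank bottleneck

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # zero-init, so W' == W at step 0

def lora_forward(x):
    # Effective weight: W + (alpha / r) * B @ A; gradients flow only to A and B.
    return x @ (W + (alpha / r) * (B @ A)).T
```

Here A and B together hold r * (d_in + d_out) parameters versus d_in * d_out for W, which is why LoRA fine-tuning is so much cheaper than the end-to-end training the study compares it against (at some cost in final CER, per the reported 20.8 vs 14.7).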


MiningGPT -- A Domain-Specific Large Language Model for the Mining Industry

Fernando, Kurukulasooriya, Demartini, Gianluca

arXiv.org Artificial Intelligence

Recent advancements in generative LLMs (Large Language Models) have exhibited human-like language capabilities but have shown a lack of domain-specific understanding. The research community has therefore started developing domain-specific LLMs for many domains. In this work we focus on how to build mining domain-specific LLMs, as the global mining industry contributes significantly to the worldwide economy. We report on MiningGPT, a mining domain-specific instruction-following 7B-parameter LLM which showed a 14% higher mining domain knowledge test score compared to its parent model, Mistral-7B-Instruct.


Beyond Labels: Aligning Large Language Models with Human-like Reasoning

Kabir, Muhammad Rafsan, Sultan, Rafeed Mohammad, Asif, Ihsanul Haque, Ahad, Jawad Ibn, Rahman, Fuad, Amin, Mohammad Ruhul, Mohammed, Nabeel, Rahman, Shafin

arXiv.org Artificial Intelligence

Aligning large language models (LLMs) with a human reasoning approach ensures that LLMs produce morally correct and human-like decisions. Ethical concerns arise because current models are prone to generating false positives and providing malicious responses. To address this issue, we have curated an ethics dataset named Dataset for Aligning Reasons (DFAR), designed to aid in aligning language models to generate human-like reasons. The dataset comprises statements with ethical-unethical labels and their corresponding reasons. In this study, we employed a novel fine-tuning approach that utilizes ethics labels and their corresponding reasons (L+R), in contrast to the existing fine-tuning approach that uses only labels (L). The original pre-trained versions, the existing fine-tuned versions, and our proposed fine-tuned versions of LLMs were then evaluated on an ethical-unethical classification task and a reason-generation task. Our proposed fine-tuning strategy notably outperforms the others in both tasks, achieving significantly higher accuracy scores in the classification task and lower misalignment rates in the reason-generation task. The increase in classification accuracy and decrease in misalignment rates indicate that the L+R fine-tuned models align more closely with human ethics. Hence, this study illustrates that injecting reasons substantially improves the alignment of LLMs, resulting in more human-like responses. We have made the DFAR dataset and corresponding code publicly available at https://github.com/apurba-nsu-rnd-lab/DFAR.
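The difference between the two fine-tuning formats can be sketched as a pair of target-string builders; the field names ("statement", "label", "reason") are illustrative, not DFAR's actual schema:

```python
def format_label_only(sample):
    # Existing approach (L): the model is trained to emit the ethics label alone.
    return f"Statement: {sample['statement']}\nLabel: {sample['label']}"

def format_label_and_reason(sample):
    # Proposed approach (L+R): the human-written reason is appended to the target,
    # so the model learns to justify its label, not just predict it.
    return format_label_only(sample) + f"\nReason: {sample['reason']}"
```

Under the L+R scheme the reason text contributes to the training loss, which is the mechanism the abstract credits for the lower misalignment rates.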